
    The Impact of Stereoscopic 3-D on Visual Short-Term Memory

    Get PDF
    Visual short-term memory has been studied extensively; however, nearly all research on this topic has assessed two-dimensional object properties. This is unexpected, given that most individuals perceive the visual environment in three dimensions. In the experiments reported here, I investigate the stimuli necessary to assess visual short-term memory while eliminating two potential confounds: the use of verbal memory to encode visual information, and the unintentional use of mental resources directed at irrelevant aspects of the memory task. I assess the impact of the amount of disparity, and of the distribution of elements in depth, on visual short-term memory. Individuals retain simple visual stimuli equally well whether information is displayed in 2-D or 3-D, regardless of how objects are distributed in 3-D. By contrast, ease of encoding does influence visual short-term memory: tasks that facilitate encoding result in better visual short-term memory performance. The experiments reported here show that stereoscopic 3-D does not improve visual short-term memory.

    A straightforward meta-analysis approach for oncology phase I dose-finding studies

    Full text link
    Phase I clinical studies aim at investigating the safety and the underlying dose-toxicity relationship of a drug or combination. While little may still be known about the compound's properties, it is crucial to consider quantitative information available from any studies that may have been conducted previously on the same drug. A meta-analytic approach has the advantage of properly accounting for between-study heterogeneity, and it may be readily extended to prediction or shrinkage applications. Here we propose a simple and robust two-stage approach for the estimation of maximum tolerated dose(s) (MTDs) utilizing penalized logistic regression and Bayesian random-effects meta-analysis methodology. Implementation is facilitated using standard R packages. The properties of the proposed methods are investigated in Monte Carlo simulations. The investigations are motivated and illustrated by two examples from oncology.
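
    A minimal sketch of the two-stage structure described above, written as equations; the reference dose, the log-dose covariate, and the target toxicity rate are illustrative assumptions rather than details taken from the abstract:

        % Stage 1: per-study dose-toxicity model, fitted with a penalized
        % (e.g. Firth-type) logistic regression to stabilise sparse data
        \operatorname{logit} p_i(d) = \alpha_i + \beta_i \log\!\left( d / d_{\mathrm{ref}} \right), \qquad i = 1, \dots, K

        % Stage 2: Bayesian random-effects meta-analysis of study-level
        % summaries \hat{\theta}_i (e.g. the log-odds of toxicity at a given dose)
        \hat{\theta}_i \mid \theta_i \sim \mathcal{N}(\theta_i, s_i^2), \qquad \theta_i \mid \mu, \tau \sim \mathcal{N}(\mu, \tau^2)

        % The pooled dose-toxicity curve is then inverted to report the dose whose
        % estimated toxicity probability is closest to a pre-specified target rate.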

    A Bayesian dose-finding design for drug combination clinical trials based on the logistic model

    Get PDF
    In early-phase dose-finding cancer studies, the objective is to determine the maximum tolerated dose, defined as the highest dose with an acceptable dose-limiting toxicity rate. Finding this dose for drug-combination trials is complicated because of drug–drug interactions, and many trial designs have been proposed to address this issue. These designs rely on complicated statistical models that typically are not familiar to clinicians and are rarely used in practice. The aim of this paper is to propose a Bayesian dose-finding design for drug-combination trials based on standard logistic regression. Under the proposed design, we continuously update the posterior estimates of the model parameters to guide decisions about dose assignment and early stopping. Simulation studies show that the proposed design is competitive and outperforms some existing designs. We also extend our design to handle delayed toxicities.
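
    One plausible reading of the "standard logistic regression" model for two agents is sketched below; the linear predictor, the interaction term, the target rate and the overdose-control rule are illustrative assumptions, not the paper's stated specification:

        % Toxicity probability for the combination (d_1, d_2) of standardised doses
        \operatorname{logit} \pi(d_1, d_2) = \beta_0 + \beta_1 d_1 + \beta_2 d_2 + \beta_3 d_1 d_2

        % After each cohort, the posterior of \beta is updated and the next cohort
        % receives the combination whose posterior mean toxicity is closest to the
        % target rate, subject to an overdose-control rule such as
        \Pr\{ \pi(d_1, d_2) > \pi_{\mathrm{target}} + \delta \mid \text{data} \} < c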

    Personalized Dynamic Treatment Regimes in Continuous Time: A Bayesian Approach for Optimizing Clinical Decisions with Timing

    Full text link
    Accurate models of clinical actions and their impacts on disease progression are critical for estimating personalized optimal dynamic treatment regimes (DTRs) in medical and health research, especially in managing chronic conditions. Traditional statistical methods for DTRs usually focus on estimating the optimal treatment or dosage at each given medical intervention, but overlook the important question of "when this intervention should happen." We fill this gap by developing a two-step Bayesian approach to optimize clinical decisions with timing. In the first step, we build a generative model for a sequence of medical interventions (discrete events in continuous time) with a marked temporal point process (MTPP), where the mark is the assigned treatment or dosage. This clinical action model is then embedded into a Bayesian joint framework in which the other components model clinical observations, including longitudinal medical measurements and time-to-event data, conditional on treatment histories. In the second step, we propose a policy gradient method to learn the personalized optimal clinical decision that maximizes patient survival by letting the MTPP interact with the model of clinical observations, while accounting for uncertainties in clinical observations learned from the posterior inference of the Bayesian joint model in the first step. A signature application of the proposed approach is to schedule follow-up visits and assign a dosage at each visit for patients after kidney transplantation. We evaluate our approach against alternative methods on both simulated and real-world datasets. In our experiments, the personalized decisions made by the proposed method are clinically useful: they are interpretable and successfully help improve patient survival.
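
    For context on the first step, the likelihood of an observed intervention sequence under a marked temporal point process has the generic form below; the conditional intensity and mark distribution are left abstract because the abstract does not specify their parametric forms:

        % Interventions (t_1, m_1), ..., (t_n, m_n) observed over the follow-up window [0, T]:
        % t_j is the intervention time, m_j the assigned treatment or dosage (the mark)
        L = \left[ \prod_{j=1}^{n} \lambda^{*}(t_j)\, f^{*}(m_j \mid t_j) \right] \exp\!\left( - \int_{0}^{T} \lambda^{*}(s)\, \mathrm{d}s \right)

        % \lambda^{*}(t): conditional intensity of the next intervention given the history;
        % f^{*}(m \mid t): conditional distribution of the mark at an intervention time.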

    Designing a paediatric study for an antimalarial drug including prior information from adults

    Get PDF
    The objectives of this study were to design a pharmacokinetic (PK) study using information from adults and to evaluate the robustness of the recommended design, through a case study of mefloquine. PK data from adults and children were available from two different randomized studies of the treatment of malaria with the same artesunate-mefloquine combination regimen. A recommended design for pediatric studies of mefloquine was optimized on the basis of an extrapolated model built from adult data through the following approach. (i) An adult PK model was built, and parameters were estimated using the stochastic approximation expectation-maximization algorithm. (ii) Pediatric PK parameters were then obtained by adding allometry and maturation to the adult model. (iii) A D-optimal design for children was obtained with PFIM, assuming the extrapolated model. Finally, the robustness of the recommended design was evaluated in terms of the relative bias and relative standard errors (RSE) of the parameters in a simulation study with four different models, and it was compared to the empirical design used for the pediatric study. Combining PK modeling, extrapolation, and design optimization led to a design for children with five sampling times. PK parameters were well estimated by this design, with small RSE. Although the extrapolated model did not predict the observed mefloquine concentrations in children very accurately, it allowed precise and unbiased estimates across various model assumptions, contrary to the empirical design. Using information from adult studies combined with allometry and maturation can help provide robust designs for pediatric studies.
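
    The allometric and maturation scaling mentioned in step (ii) conventionally takes the form below; the clearance exponent of 0.75, the 70 kg reference weight, and the sigmoid maturation function of postmenstrual age are standard choices assumed here for illustration:

        % Allometric scaling of a pediatric clearance from the adult estimate
        CL_{\mathrm{child}} = CL_{\mathrm{adult}} \times \left( \frac{WT}{70} \right)^{0.75} \times MF(PMA)

        % Sigmoid maturation function of postmenstrual age (PMA)
        MF(PMA) = \frac{PMA^{\gamma}}{PMA^{\gamma} + TM_{50}^{\gamma}}

        % Volume-of-distribution terms are typically scaled with an exponent of 1 on body weight.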

    Value of information methods to design a clinical trial in a small population to optimise a health economic utility function

    Get PDF
    Background: Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test, together with other information, to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. Methods: We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity, we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application to an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in the primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. Results: The optimal sample size for the clinical trial decreases as the size of the population decreases. For a non-zero cost of treating future patients, whether monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Conclusions: Decision-theoretic VOI methods offer a flexible approach in which both the type I error rate and the power (or, equivalently, the trial sample size) depend on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
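
    As a rough sketch of the kind of expected utility being maximized (all symbols and the functional form are illustrative assumptions, not the paper's exact specification):

        % n: patients per arm; c: critical value for declaring the trial successful;
        % N: size of the future patient population; \delta: true treatment effect (normal prior)
        U(n, c) = \mathbb{E}_{\delta}\!\left[ \Pr\{\text{approval} \mid \delta, n, c\} \, (N - 2n) \left( g(\delta) - c_{\mathrm{future}} \right) \right] - 2 n\, c_{\mathrm{trial}}

        % g(\delta): per-patient gain from the improvement in the primary outcome;
        % c_trial, c_future: per-patient costs (financial and potential harm) during and after the trial.
        % Maximizing U over (n, c) ties both the sample size and the implied type I
        % error rate to the population size N.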

    Approaches to sample size calculation for clinical trials in rare diseases

    Get PDF
    We discuss three alternative approaches to sample size calculation: traditional sample size calculation based on power to show a statistically significant effect, sample size calculation based on assurance, and sample size calculation based on a decision-theoretic approach. These approaches are compared head-to-head for clinical trial situations in rare diseases. Specifically, we consider three case studies of rare diseases (Lyell disease, adult-onset Still disease, and cystic fibrosis) with the aim of planning the sample size for an upcoming clinical trial. We outline in detail reasonable choices of parameters for these approaches in each of the three case studies and calculate the resulting sample sizes. We stress that the influence of the input parameters needs to be investigated in all approaches, and we recommend investigating different sample size approaches before finally deciding on the trial size. The sample size is strongly influenced by the choice of the treatment effect parameter in all approaches and, in the decision-theoretic approach, by the parameter for the additional cost of the new treatment; these should therefore be discussed extensively.
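
    As a concrete illustration of the first two approaches, the sketch below computes a conventional power-based sample size for a normally distributed endpoint and then an assurance-based figure by averaging power over a prior on the treatment effect; the effect size, standard deviation, and prior settings are arbitrary placeholders, not values from the case studies:

        import numpy as np
        from scipy import stats

        def n_per_arm_power(delta, sigma, alpha=0.025, power=0.8):
            """Per-arm sample size for a two-arm trial, normal endpoint, one-sided test."""
            z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
            return int(np.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2))

        def assurance(n, sigma, prior_mean, prior_sd, alpha=0.025, n_sim=100_000, seed=1):
            """Assurance: power averaged over a normal prior on the true effect."""
            rng = np.random.default_rng(seed)
            delta = rng.normal(prior_mean, prior_sd, n_sim)   # draws from the prior
            se = sigma * np.sqrt(2 / n)                        # SE of the mean difference
            power = 1 - stats.norm.cdf(stats.norm.ppf(1 - alpha) - delta / se)
            return power.mean()

        if __name__ == "__main__":
            n = n_per_arm_power(delta=0.5, sigma=1.0)          # classical power-based size
            print(n, assurance(n, sigma=1.0, prior_mean=0.5, prior_sd=0.3))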

    Development of a dose-finding method modelling a toxicity score for phase I clinical trials in oncology

    Get PDF
    The aim of a phase I oncology trial is to identify a dose with an acceptable safety level. Most phase I designs use the dose-limiting toxicity (DLT), a binary endpoint, to assess the level of toxicity. The DLT can be an incomplete endpoint for investigating molecularly targeted therapies, as a lot of useful toxicity information is discarded. In this work, we propose a quasi-continuous toxicity score, the Total Toxicity Profile (TTP), to measure quantitatively and comprehensively the overall burden of multiple toxicities. The TTP is defined as the Euclidean norm of the weights of the toxicities experienced by a patient, where the weights reflect the relative clinical importance of each type and grade of toxicity. We then propose a dose-finding design, the Quasi-Likelihood Continual Reassessment Method (QLCRM), incorporating the TTP score into the CRM, with a logistic model for the dose-toxicity relationship in a frequentist framework. Using simulations, we compare our design to three existing designs for quasi-continuous toxicity scores: i) the QCRM design, proposed by Yuan et al., with an empiric model for the dose-toxicity relationship in a Bayesian framework; ii) the UA design of Ivanova and Kim, derived from the "up-and-down" methods for the dose-escalation process and using isotonic regression to estimate the recommended dose at the end of the trial; and iii) the EID design of Chen et al., using isotonic regression both for the dose-escalation process and for the identification of the recommended dose. We also perform a simulation study to evaluate the TTP-driven methods in comparison with the classical DLT-driven CRM, and we evaluate the robustness of these designs in a setting where grades can be misclassified. In the last part of this work, we illustrate the process of building the TTP score and the application of the QLCRM method through the example of a paediatric trial. In this study, we used the Delphi method to elicit the weights and the target toxicity score considered an acceptable toxicity measure. All designs using the TTP score to identify the recommended dose had good performance characteristics for most scenarios, with good overdose control. For a sample size of 36, the percentage of correct selection for the QLCRM ranged from 80 to 90%, with similar results for the QCRM design. The simulation study also demonstrates that score-driven designs offer improved performance and robustness compared with conventional DLT-driven designs. In the retrospective application to an erlotinib trial, the consensus weights as well as the target TTP were easily obtained, confirming the feasibility of the process. Some guidelines are suggested to facilitate the process in a real clinical trial and support good practice of this approach. The QLCRM method based on the TTP endpoint, combining multiple graded toxicities, is an appealing alternative to conventional dose-finding designs, especially in the context of molecularly targeted agents.
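
    A small sketch of how a TTP score can be computed from a weight matrix, following the Euclidean-norm definition above; the weight values and toxicity categories are invented for illustration and are not the weights elicited in the thesis:

        import numpy as np

        # Hypothetical weight matrix: rows are toxicity types, columns are grades 1-4.
        # Weights grow with grade and with the clinical importance of the toxicity type.
        WEIGHTS = {
            "neutropenia":  [0.5, 1.0, 1.5, 2.5],
            "liver":        [0.5, 1.0, 2.0, 3.0],
            "neurological": [1.0, 1.5, 2.5, 3.5],
        }

        def total_toxicity_profile(observed):
            """TTP = Euclidean norm of the weights of the toxicities a patient experienced.

            `observed` maps toxicity type -> worst grade observed (1-4),
            e.g. {"neutropenia": 3, "liver": 1}.
            """
            w = [WEIGHTS[tox][grade - 1] for tox, grade in observed.items()]
            return float(np.linalg.norm(w))

        print(total_toxicity_profile({"neutropenia": 3, "liver": 1}))  # sqrt(1.5**2 + 0.5**2)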

    Does low prevalence affect the sample size of interventional clinical trials of rare diseases? An analysis of data from the Aggregate Analysis of ClinicalTrials.gov

    Get PDF
    Background Clinical trials are typically designed using the classical frequentist framework to constrain type I and II error rates. Sample sizes required in such designs typically range from hundreds to thousands of patients, which can be challenging for rare diseases. It has been shown that rare disease trials have smaller sample sizes than non-rare disease trials. Indeed, some orphan drugs were approved by the European Medicines Agency based on studies with as few as 12 patients, although other studies supporting marketing authorisation included several hundred patients. In this work, we explore the relationship between disease prevalence, other factors, and the size of interventional phase 2 and 3 rare disease trials conducted in the US and/or EU. We downloaded all clinical trials from the Aggregate Analysis of ClinicalTrials.gov (AACT) and identified rare disease trials by cross-referencing MeSH terms in AACT with the list from Orphadata. We examined the effects of prevalence and phase of study in a multiple linear regression model adjusting for other statistically significant trial characteristics. Results Of 186,941 ClinicalTrials.gov trials, only 1,567 (0.8%) studied a single rare condition with prevalence information from Orphadata. There were 19 (1.2%) trials studying diseases with prevalence <1/1,000,000, 126 (8.0%) with 1–9/1,000,000, 791 (50.5%) with 1–9/100,000 and 631 (40.3%) with 1–5/10,000. Of the 1,567 trials, 1,160 (74%) were phase 2 trials. The fitted mean sample size for the rarest diseases (prevalence <1/1,000,000) in phase 2 trials was the lowest (mean, 15.7; 95% CI, 8.7–28.1), while the fitted means were similar across the other prevalence classes: 26.2 (16.1–42.6), 33.8 (22.1–51.7) and 35.6 (23.3–54.3) for prevalence 1–9/1,000,000, 1–9/100,000 and 1–5/10,000, respectively. The fitted mean sizes of phase 3 trials of the rarer diseases, <1/1,000,000 (19.2, 6.9–53.2) and 1–9/1,000,000 (33.1, 18.6–58.9), were similar to those of phase 2 trials but were statistically significantly lower than those of trials of the slightly less rare diseases, 1–9/100,000 (75.3, 48.2–117.6) and 1–5/10,000 (77.7, 49.6–121.8). Conclusions We found that prevalence was associated with the size of phase 3 trials, with trials of rarer diseases noticeably smaller than trials of less rare diseases: for the rarer diseases (prevalence <1/100,000), phase 3 trials were similar in size to phase 2 trials, whereas for the less rare diseases (prevalence ≄1/100,000), phase 3 trials were larger than phase 2 trials.
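
    A sketch of the kind of regression described in the Methods, modelling log-transformed enrolment with a prevalence-by-phase interaction; the data frame, column names, and toy values are hypothetical, not the actual AACT extract or the authors' final model:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical extract of rare-disease trials; the grouping mirrors the
        # contrast drawn in the Conclusions (<1/100,000 vs >=1/100,000 prevalence).
        trials = pd.DataFrame({
            "enrollment": [14, 22, 30, 28, 35, 40, 80, 95],
            "rarer":      [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = prevalence < 1/100,000
            "phase3":     [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = phase 3 trial
        })

        # Regress log(sample size) on prevalence group, phase, and their interaction,
        # mirroring the comparison of fitted mean sizes across prevalence classes by phase.
        model = smf.ols("np.log(enrollment) ~ rarer * phase3", data=trials).fit()
        print(model.params)

        # Back-transformed fitted (geometric) mean sizes, one per prevalence-by-phase cell.
        cells = pd.DataFrame({"rarer": [1, 1, 0, 0], "phase3": [0, 1, 0, 1]})
        print(np.exp(model.predict(cells)).round(1))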
    • 
